GCN-RL Circuit Designer: Transferable Transistor Sizing with Graph Neural Networks and Reinforcement Learning
Automatic transistor sizing is a challenging problem in circuit design due to
the large design space, complex performance trade-offs, and fast technological
advancements. Although there has been plenty of work on transistor sizing
targeting a single circuit, limited research has been done on transferring the
knowledge from one circuit to another to reduce the re-design overhead. In this
paper, we present GCN-RL Circuit Designer, leveraging reinforcement learning
(RL) to transfer the knowledge between different technology nodes and
topologies. Moreover, inspired by the simple fact that a circuit is a graph, we
learn on the circuit topology representation with graph convolutional neural
networks (GCN). The GCN-RL agent extracts features of the topology graph, whose
vertices are transistors and whose edges are wires. Our learning-based optimization
consistently achieves the highest Figures of Merit (FoM) on four different
circuits compared with conventional black-box optimization methods (Bayesian
Optimization, Evolutionary Algorithms), random search, and human expert
designs. Experiments on transfer learning between five technology nodes and two
circuit topologies demonstrate that RL with transfer learning can achieve much
higher FoMs than methods without knowledge transfer. Our transferable
optimization method makes transistor sizing and design porting more effective
and efficient.
Comment: Accepted to the 57th Design Automation Conference (DAC 2020); 6
pages, 8 figures
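The topology encoding can be illustrated with a single graph-convolution layer over a toy transistor graph; the adjacency matrix, feature sizes, and Kipf-and-Welling-style normalisation below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

# Hypothetical 4-transistor circuit graph: vertices are transistors,
# edges are wires (topology and sizes are illustrative assumptions).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))   # per-transistor features
W = np.random.default_rng(1).normal(size=(8, 16))  # learnable weights

# One graph-convolution layer: add self-loops, symmetrically normalise
# the adjacency, propagate features, then apply a ReLU nonlinearity.
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# H holds per-transistor embeddings an RL policy could size devices from.
```

Stacking a few such layers gives the agent a topology-aware state representation, which is what makes the learned sizing knowledge transferable across circuits.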
Hyperspectral Target Detection Based on Low-Rank Background Subspace Learning and Graph Laplacian Regularization
Hyperspectral target detection excels at finding dim and small objects based
on spectral characteristics. However, existing representation-based methods are
hindered by the problem of the unknown background dictionary and insufficient
utilization of spatial information. To address these issues, this paper
proposes an efficient optimization approach based on low-rank representation
(LRR) and graph Laplacian regularization (GLR). Firstly, to obtain a complete
and pure background dictionary, we propose an LRR-based background subspace
learning method by jointly mining the low-dimensional structure of all pixels.
Secondly, to fully exploit local spatial relationships and capture the
underlying geometric structure, a local region-based GLR is employed to
estimate the coefficients. Finally, the desired detection map is generated by
computing the ratio of representation errors from binary hypothesis testing.
The experiments conducted on two benchmark datasets validate the effectiveness
and superiority of the approach. For reproducibility, the accompanying code is
available at https://github.com/shendb2022/LRBSL-GLR.
Comment: 4 pages, 3 figures, 1 table
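The final detection step, a ratio of representation errors under a binary hypothesis test, can be sketched as follows; the background subspace, target spectrum, and pixels here are random placeholders rather than the learned LRR quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 50, 100               # spectral bands, pixels (illustrative sizes)
B = rng.normal(size=(L, 8))  # stand-in for the learned background subspace
t = rng.normal(size=(L, 1))  # prior target spectrum
Y = rng.normal(size=(L, N))  # hyperspectral pixels, one column each

def residual(D, Y):
    # Least-squares representation error of each pixel under dictionary D.
    coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
    return np.linalg.norm(Y - D @ coef, axis=0)

# Binary hypothesis test: H0 = background only, H1 = background + target.
# Pixels poorly explained without the target atom get large ratios.
det_map = residual(B, Y) / (residual(np.hstack([B, t]), Y) + 1e-12)
```

Because the H1 dictionary contains the H0 dictionary, its residual can only be smaller, so the ratio is at least 1 and large values flag likely targets.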
Controlling single rare earth ion emission in an electro-optical nanocavity
Rare earth emitters enable critical quantum resources including spin qubits,
single photon sources, and quantum memories. Yet, probing of single ions
remains challenging due to the low emission rate of their intra-4f optical
transitions. One feasible approach is through Purcell enhanced emission in
optical cavities. The ability to modulate cavity-ion coupling in real time will
further elevate the capacity of such systems. Here, we demonstrate direct
control of single ion emission by embedding erbium dopants in an
electro-optically active photonic crystal cavity patterned from thin-film
lithium niobate. A Purcell factor over 170 enables single-ion detection, which is
verified by second-order autocorrelation measurements. Dynamic control of the
emission rate is realized by leveraging electro-optic tuning of the resonance
frequency. Using this feature, storage and retrieval of a single ion excitation
is further demonstrated without perturbing the emission characteristics. These
results promise new opportunities for controllable single photon sources and
efficient spin-photon interfaces.
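The reported enhancement can be related to the textbook Purcell factor for an emitter ideally positioned and aligned in a cavity of quality factor $Q$ and mode volume $V$ (stated here as general background, not the paper's exact expression):

```latex
F_P = \frac{3}{4\pi^2}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V}
```

A small mode volume at a fixed quality factor is what lets a photonic crystal cavity reach $F_P > 170$ and lift the weak intra-4f emission to a detectable rate.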
Cultivation of Scenedesmus dimorphus for C/N/P removal and lipid production
Background: CO2 emission, water pollution and petroleum shortage are
issues that come with the development of industry. A cost-effective system
was constructed to fix the CO2 in flue gas (15% CO2), remove nitrogen
and phosphorus from manure wastewater, and produce biofuels at the same
time. The significant cultivation conditions were selected by a
Plackett–Burman design and then optimized with a central composite
design. Results: The optimum culture condition was predicted at a light
intensity of 238 μmol·m⁻²·s⁻¹, TN of 152 mg·L⁻¹, and an
inoculum density of 0.3 g·L⁻¹, under which the measured CO2
fixation rate, total nitrogen and phosphorus removal rates, and lipid
content were 638.13 mg·L⁻¹·d⁻¹, 88.16%, 73.98% and 11.9%,
respectively. The lipid content was then enhanced to 24.2% by a
nitrogen starvation strategy. Conclusion: A cultivation strategy is
suggested to achieve effective C/N/P removal from flue gas and manure
wastewater while obtaining a high lipid content from microalgal
biomass.
MiliPoint: A Point Cloud Dataset for mmWave Radar
Millimetre-wave (mmWave) radar has emerged as an attractive and
cost-effective alternative for human activity sensing compared to traditional
camera-based systems. mmWave radars are also non-intrusive, providing better
protection for user privacy. However, as a Radio Frequency (RF) based
technology, mmWave radars rely on capturing reflected signals from objects,
making them more prone to noise compared to cameras. This raises an intriguing
question for the deep learning community: Can we develop more effective point
set-based deep learning methods for such attractive sensors?
To answer this question, our work, termed MiliPoint, delves into this idea by
providing a large-scale, open dataset for the community to explore how mmWave
radars can be utilised for human activity recognition. Moreover, MiliPoint
stands out as it is larger in size than existing datasets, has more diverse
human actions represented, and encompasses all three key tasks in human
activity recognition. We have also established a range of point-based deep
neural networks, such as DGCNN, PointNet++ and PointTransformer, on MiliPoint,
which can serve as baselines for further development.
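A minimal point-set encoder in the spirit of PointNet illustrates why order-invariant networks suit such radar point clouds; the weights below are untrained random placeholders, not one of the evaluated baselines.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(64, 3))  # one mmWave frame: 64 (x, y, z) points

# PointNet-style encoder: a shared per-point MLP followed by a symmetric
# max-pool, so the output is invariant to point ordering.
W1, b1 = rng.normal(size=(3, 32)), rng.normal(size=32)
W2, b2 = rng.normal(size=(32, 128)), rng.normal(size=128)

def encode(pts):
    h = np.maximum(pts @ W1 + b1, 0.0)   # shared MLP, layer 1 (ReLU)
    h = np.maximum(h @ W2 + b2, 0.0)     # shared MLP, layer 2 (ReLU)
    return h.max(axis=0)                 # permutation-invariant pooling

global_feat = encode(points)

# Shuffling the points leaves the descriptor unchanged.
assert np.allclose(global_feat, encode(points[rng.permutation(64)]))
```

This invariance matters for radar data because reflected points arrive with no meaningful order and their count varies from frame to frame.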
An Adaptive Resilience Testing Framework for Microservice Systems
Resilience testing, which measures the ability to minimize service
degradation caused by unexpected failures, is crucial for microservice systems.
The current practice for resilience testing relies on manually defining rules
for different microservice systems. Due to the diverse business logic of
microservices, there are no one-size-fits-all microservice resilience testing
rules. As the quantity and dynamics of microservices and failures grow, manual
configuration exhibits scalability and adaptivity issues. To overcome these two
issues, we empirically compare the impacts of common
failures in the resilient and unresilient deployments of a benchmark
microservice system. Our study demonstrates that the resilient deployment can
block the propagation of degradation from system performance metrics (e.g.,
memory usage) to business metrics (e.g., response latency). In this paper, we
propose AVERT, the first AdaptiVE Resilience Testing framework for microservice
systems. AVERT first injects failures into microservices and collects available
monitoring metrics. Then AVERT ranks all the monitoring metrics according to
their contributions to the overall service degradation caused by the injected
failures. Lastly, AVERT produces a resilience index by how much the degradation
in system performance metrics propagates to the degradation in business
metrics. The higher the degradation propagation, the lower the resilience of
the microservice system. We evaluate AVERT on two open-source benchmark
microservice systems. The experimental results show that AVERT can accurately
and efficiently test the resilience of microservice systems.
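The degradation-propagation idea behind the resilience index can be sketched numerically; the metric values and the exact ratio used here are illustrative assumptions, not AVERT's precise formulation.

```python
# Hypothetical metric readings before and during an injected failure.
def degradation(baseline, faulty):
    # Relative degradation of a metric under failure injection.
    return max(faulty - baseline, 0.0) / baseline

# System performance metric: memory usage (MB) degrades in both cases.
mem_prop = degradation(baseline=400.0, faulty=900.0)

# Business metric: response latency (ms). A resilient deployment blocks
# the propagation; a fragile one lets latency blow up.
lat_resilient = degradation(baseline=120.0, faulty=126.0)
lat_fragile = degradation(baseline=120.0, faulty=420.0)

# Propagation = business degradation relative to system degradation;
# higher propagation implies lower resilience.
prop_resilient = lat_resilient / mem_prop
prop_fragile = lat_fragile / mem_prop
assert prop_resilient < prop_fragile
```

In this toy example the resilient deployment turns a 125% memory degradation into only a 5% latency degradation, while the fragile one amplifies it, which is exactly the contrast the index is meant to capture.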
Generative AI for Integrated Sensing and Communication: Insights from the Physical Layer Perspective
As generative artificial intelligence (GAI) models continue to evolve, their
generative capabilities are increasingly enhanced and being used extensively in
content generation. Beyond this, GAI also excels in data modeling and analysis,
benefitting wireless communication systems. In this article, we investigate
applications of GAI in the physical layer and analyze its support for
integrated sensing and communications (ISAC) systems. Specifically, we first
provide an overview of GAI and ISAC, touching on GAI's potential support across
multiple layers of ISAC. We then concentrate on the physical layer,
thoroughly investigating GAI's applications from various perspectives, such as
channel estimation, and demonstrate the value of these GAI-enhanced physical
layer technologies for ISAC systems. In the case study, the proposed diffusion
model-based method effectively estimates the signal direction of arrival under
the near-field condition based on a uniform linear array when the antenna
spacing surpasses half the wavelength. With a mean square error of 1.03
degrees, it confirms GAI's support for the physical layer in near-field sensing
and communications.
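To see why spacing beyond half a wavelength makes the estimation hard, a classical far-field delay-and-sum scan on a ULA (a conventional baseline, not the paper's near-field diffusion method) exhibits grating-lobe ambiguity:

```python
import numpy as np

# Illustrative far-field ULA signal model with the spacing larger than
# half the wavelength, as in the case study's setting.
lam = 1.0
d = 0.8 * lam                  # element spacing beyond lambda / 2
M = 16                         # number of antennas
true_theta = np.deg2rad(20.0)  # assumed true direction of arrival

n = np.arange(M)
a = np.exp(1j * 2 * np.pi * d / lam * n * np.sin(true_theta))

# Delay-and-sum spatial spectrum over a grid of candidate angles.
thetas = np.deg2rad(np.linspace(-90, 90, 721))
A = np.exp(1j * 2 * np.pi * d / lam * np.outer(np.sin(thetas), n))
spectrum = np.abs(A.conj() @ a) / M

# With d > lambda/2 the spectrum has grating lobes: a second angle far
# from the true one matches almost perfectly, so the DoA is ambiguous.
peaks = thetas[spectrum > 0.99 * spectrum.max()]
```

Resolving this ambiguity (and the additional curvature of near-field wavefronts) is the gap the diffusion-model-based estimator targets.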
Semantic Communications for Wireless Sensing: RIS-aided Encoding and Self-supervised Decoding
Semantic communications can reduce resource consumption by transmitting
task-related semantic information extracted from source messages. However, when
the source messages are utilized for various tasks, e.g., wireless sensing data
for localization and activity detection, semantic communication techniques are
difficult to implement because of the increased processing complexity. In
this paper, we propose inverse semantic communications as a new paradigm.
Instead of extracting semantic information from messages, we aim to encode the
task-related source messages into a hyper-source message for data transmission
or storage. Following this paradigm, we design an inverse semantic-aware
wireless sensing framework with three algorithms for data sampling,
reconfigurable intelligent surface (RIS)-aided encoding, and self-supervised
decoding, respectively. Specifically, on the one hand, we propose a novel RIS
hardware design for encoding several signal spectrums into one MetaSpectrum. To
select the task-related signal spectrums for achieving efficient encoding, a
semantic hash sampling method is introduced. On the other hand, we propose a
self-supervised learning method for decoding the MetaSpectrums to obtain the
original signal spectrums. Using sensing data collected from the real world, we
show that our framework can reduce the data volume by 95% compared to that
before encoding, without affecting the accomplishment of sensing tasks.
Moreover, compared with the typically used uniform sampling scheme, the
proposed semantic hash sampling scheme can achieve 67% lower mean squared error
in recovering the sensing parameters. In addition, experiment results
demonstrate that the amplitude response matrix of the RIS enables the
encryption of the sensing data.
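The spectrum-selection step can be illustrated with a generic locality-sensitive hash; this is a stand-in for the paper's semantic hash sampling, whose actual construction differs.

```python
import numpy as np

rng = np.random.default_rng(0)
spectrums = rng.normal(size=(200, 64))  # candidate signal spectrums (synthetic)

# Locality-sensitive hash: signs of random projections place similar
# spectrums into the same bucket (an illustrative stand-in, not the
# paper's semantic hash).
proj = rng.normal(size=(64, 8))
bits = (spectrums @ proj > 0).astype(int)
codes = bits @ (1 << np.arange(8))      # 8-bit bucket code per spectrum

# Keep one representative spectrum per bucket instead of all 200,
# shrinking the data volume before RIS-aided encoding.
_, keep = np.unique(codes, return_index=True)
sampled = spectrums[keep]
```

The point of hashing before sampling is that redundant, near-duplicate spectrums collapse into shared buckets, so the retained set covers the task-relevant variety at a fraction of the volume.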
Text Is All You Need: Learning Language Representations for Sequential Recommendation
Sequential recommendation aims to model dynamic user behavior from historical
interactions. Existing methods rely on either explicit item IDs or general
textual features for sequence modeling to understand user preferences. While
promising, these approaches still struggle to model cold-start items or
transfer knowledge to new datasets. In this paper, we propose to model user
preferences and item features as language representations that can be
generalized to new items and datasets. To this end, we present a novel
framework, named Recformer, which effectively learns language representations
for sequential recommendation. Specifically, we propose to formulate an item as
a "sentence" (word sequence) by flattening item key-value attributes described
by text so that an item sequence for a user becomes a sequence of sentences.
For recommendation, Recformer is trained to understand the "sentence" sequence
and retrieve the next "sentence". To encode item sequences, we design a
bi-directional Transformer similar to the model Longformer but with different
embedding layers for sequential recommendation. For effective representation
learning, we propose novel pretraining and finetuning methods which combine
language understanding and recommendation tasks. Therefore, Recformer can
effectively recommend the next item based on language representations.
Extensive experiments conducted on six datasets demonstrate the effectiveness
of Recformer for sequential recommendation, especially in low-resource and
cold-start settings.
Comment: Accepted to KDD 202
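The item-as-sentence formulation can be sketched in a few lines; the attribute names and values here are illustrative, not from the paper's datasets.

```python
# Flatten an item's key-value attributes into one "sentence" (word
# sequence), in the style Recformer describes.
def item_to_sentence(attrs: dict) -> str:
    return " ".join(f"{k} {v}" for k, v in attrs.items())

item = {"title": "wireless mouse", "brand": "Acme", "color": "black"}
sentence = item_to_sentence(item)

# A user's interaction history then becomes a sequence of sentences,
# which a bi-directional Transformer encodes for next-item retrieval.
history = [item_to_sentence(i) for i in [item, {"title": "usb hub"}]]
```

Because items are represented purely by text rather than by IDs, a cold-start item with written attributes gets a usable representation immediately, which is the source of the reported low-resource gains.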